Patient similarity


Improving Forecasts of Suicide Attempts for Patients with Little Data

Hang, Genesis, Chen, Annie, Neveux, Hope, Nock, Matthew K., Yacoby, Yaniv

arXiv.org Machine Learning

Ecological Momentary Assessment provides real-time data on suicidal thoughts and behaviors, but predicting suicide attempts remains challenging due to their rarity and patient heterogeneity. We show that single models fit to all patients perform poorly, while individualized models improve performance but still overfit for patients with limited data. To address this, we introduce Latent Similarity Gaussian Processes (LSGPs) to capture patient heterogeneity, enabling patients with little data to leverage similar patients' trends. Preliminary results are promising: even without kernel design, we outperform all but one baseline while offering a new understanding of patient similarity.
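As an illustration of the idea behind LSGPs (not the authors' implementation), the sketch below builds a Gaussian-process covariance over (patient, time) observations by scaling a shared RBF kernel over time with the inner-product similarity of latent patient embeddings. All names (`latent_similarity_kernel`, `Z`) are ours.

```python
import numpy as np

def rbf(t1, t2, lengthscale=1.0):
    """Squared-exponential kernel over observation times."""
    d = np.subtract.outer(t1, t2)
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def latent_similarity_kernel(times, patient_ids, Z, lengthscale=1.0):
    """Covariance over (patient, time) observations: a shared RBF kernel
    over time, scaled by the similarity of the patients' latent
    embeddings Z (one row per patient)."""
    S = (Z @ Z.T)[np.ix_(patient_ids, patient_ids)]  # patient-pair similarity
    return S * rbf(times, times, lengthscale)
```

Because both factors are positive semi-definite Gram matrices, their elementwise (Schur) product is a valid covariance, which is what lets data-poor patients borrow statistical strength from similar patients.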


Patient Similarity Computation for Clinical Decision Support: An Efficient Use of Data Transformation, Combining Static and Time Series Data

Sana, Joydeb Kumar, Masud, Mohammad M., Rahman, M Sohel, Rahman, M Saifur

arXiv.org Artificial Intelligence

Patient similarity computation (PSC) is a fundamental problem in healthcare informatics. Its aim is to measure the similarity among patients according to their historical clinical records, which helps improve clinical decision support. This paper presents a novel distributed patient similarity computation (DPSC) technique based on data transformation (DT) methods, using an effective combination of time series and static data. Time series data are sensor-collected patient measurements, including metrics like heart rate, blood pressure, oxygen saturation, and respiration. Static data are mainly patient background and demographic attributes, including age, weight, height, and gender. The static data are used to cluster the patients. Before feeding the static data to the machine learning model, adaptive Weight-of-Evidence (aWOE) and Z-score data transformation (DT) methods are applied, which improve prediction performance. In the aWOE-based patient similarity models, sensitive patient information is processed with aWOE, which preserves the data privacy of the trained models. For time series similarity we use Dynamic Time Warping (DTW), a robust and very popular approach. However, DTW is unsuitable for big data because of its significant computational run-time; to overcome this problem, distributed DTW computation is used in this study. For coronary artery disease, our DT-based approach boosts prediction performance by as much as 11.4%, 10.20%, and 12.6% in terms of AUC, accuracy, and F-measure, respectively. For congestive heart failure (CHF), the proposed method achieves performance enhancements of up to 15.9%, 10.5%, and 21.9% on the same measures, respectively, and reduces computation time by as much as 40%.
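The DTW distance at the core of the time-series comparison can be sketched with the textbook dynamic program (the distributed computation and the aWOE/Z-score transforms are omitted here):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences:
    D[i, j] holds the cheapest alignment cost of a[:i] and b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The quadratic table is exactly the run-time problem the abstract mentions, which motivates distributing the computation across patient pairs.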


PRISM: Leveraging Prototype Patient Representations with Feature-Missing-Aware Calibration for EHR Data Sparsity Mitigation

Zhu, Yinghao, Wang, Zixiang, He, Long, Xie, Shiyun, Ma, Liantao, Pan, Chengwei

arXiv.org Artificial Intelligence

Electronic Health Record (EHR) data, while rich in information, often suffers from sparsity, posing significant challenges in predictive modeling. Traditional imputation methods inadequately distinguish between real and imputed data, leading to potential inaccuracies in models. Addressing this, we introduce PRISM, a novel approach that indirectly imputes data through prototype representations of similar patients, thus ensuring denser and more accurate embeddings. PRISM innovates further with a feature confidence learner module, which evaluates the reliability of each feature in light of missing data. Additionally, it incorporates a novel patient similarity metric that accounts for feature confidence, avoiding overreliance on imprecise imputed values. Our extensive experiments on the MIMIC-III and MIMIC-IV datasets demonstrate PRISM's superior performance in predicting in-hospital mortality and 30-day readmission tasks, showcasing its effectiveness in handling EHR data sparsity. For the sake of reproducibility and further research, we have made the code publicly available at https://github.com/yhzhu99/PRISM.
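As a rough illustration of prototype-based indirect imputation (a simplification, not PRISM's actual modules), the sketch below fills each patient's missing features from the average of their most similar patients' observed values; plain cosine similarity over observed features stands in for the paper's confidence-aware metric.

```python
import numpy as np

def prototype_impute(X, mask, k=2):
    """Fill each patient's missing features (mask == False) with the
    average observed value among their k most similar patients.
    Similarity is cosine over observed features only; a feature that no
    neighbour observed falls back to 0.0 in this sketch."""
    Xz = np.where(mask, X, 0.0)
    unit = Xz / (np.linalg.norm(Xz, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)      # never match a patient to themselves
    out = X.copy()
    for i in range(X.shape[0]):
        nbrs = np.argsort(sim[i])[-k:]  # indices of the k nearest patients
        proto = Xz[nbrs].sum(0) / (mask[nbrs].sum(0) + 1e-8)
        out[i] = np.where(mask[i], X[i], proto)
    return out
```

Observed entries pass through untouched; only the missing ones are replaced by the prototype, which is the sense in which the imputation is "indirect".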


Improving ICD-based semantic similarity by accounting for varying degrees of comorbidity

Schneider, Jan Janosch, Adler, Marius, Ammer-Herrmenau, Christoph, König, Alexander Otto, Sax, Ulrich, Hügel, Jonas

arXiv.org Artificial Intelligence

Finding similar patients is a common objective in precision medicine, facilitating treatment outcome assessment and clinical decision support. Choosing widely available patient features and appropriate mathematical methods for similarity calculations is crucial. International Statistical Classification of Diseases and Related Health Problems (ICD) codes are used worldwide to encode diseases and are available for nearly all patients. Aggregated as sets consisting of primary and secondary diagnoses, they can indicate a degree of comorbidity and reveal comorbidity patterns. It is possible to compute the similarity of patients based on their ICD codes by using semantic similarity algorithms. These algorithms have traditionally been evaluated using a single-term, expert-rated data set. However, real-world patient data often display varying degrees of documented comorbidities that might impair algorithm performance. To account for this, we present a scale term that considers documented comorbidity variance. In this work, we compared the performance of 80 combinations of established algorithms for semantic similarity based on ICD-code sets. The sets were extracted from patients with a C25.X (pancreatic cancer) primary diagnosis and provide a variety of combinations of ICD codes. Using our scale term, we obtained the best results with a combination of level-based information content, Leacock & Chodorow concept similarity, and bipartite graph matching for the set similarities, reaching a correlation of 0.75 with our expert's ground truth. Our results highlight the importance of accounting for comorbidity variance while demonstrating how well current semantic similarity algorithms perform.
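The Leacock & Chodorow concept similarity and bipartite set matching mentioned above can be sketched on a toy ICD-like hierarchy (the hierarchy, `MAX_DEPTH`, and the brute-force matching are illustrative assumptions, not the paper's setup):

```python
import math
from itertools import permutations

# Toy ICD-like hierarchy (child -> parent); purely illustrative.
PARENT = {"C25.0": "C25", "C25.1": "C25", "C25": "C15-C26",
          "K85.9": "K85", "K85": "K80-K87",
          "C15-C26": "root", "K80-K87": "root"}
MAX_DEPTH = 3  # depth of this toy taxonomy

def path_length(a, b):
    """Number of edges on the shortest path between two codes."""
    ancestors, c, d = {}, a, 0
    while True:
        ancestors[c] = d
        if c == "root":
            break
        c, d = PARENT[c], d + 1
    c, d = b, 0
    while c not in ancestors:   # climb until we hit a shared ancestor
        c, d = PARENT[c], d + 1
    return d + ancestors[c]

def leacock_chodorow(a, b):
    """Leacock & Chodorow similarity: -log(path / (2 * taxonomy depth)),
    with +1 so identical codes do not hit log(0)."""
    return -math.log((path_length(a, b) + 1) / (2.0 * MAX_DEPTH))

def set_similarity(codes_a, codes_b):
    """Set similarity via bipartite matching: best one-to-one pairing
    of codes (brute force, fine for small diagnosis sets)."""
    small, large = sorted([codes_a, codes_b], key=len)
    return max(sum(leacock_chodorow(s, l) for s, l in zip(small, perm))
               for perm in permutations(large, len(small)))
```

Codes sharing a nearby ancestor (e.g. two C25 subcodes) score higher than codes from different chapters, which is what makes the measure usable for comorbidity-aware patient comparison.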


A Study into patient similarity through representation learning from medical records

Memarzadeh, Hoda, Ghadiri, Nasser, Samwald, Matthias, Shahreza, Maryam Lotfi

arXiv.org Artificial Intelligence

Patient similarity assessment, which identifies patients similar to a given patient, can help improve medical care. The assessment can be performed using Electronic Medical Records (EMRs). Measuring patient similarity requires converting heterogeneous EMRs into comparable formats so that their distance can be calculated. While versatile document representation learning methods have been developed in recent years, it is still unclear how complex EMR data should be processed to create the most useful patient representations. This study presents a new data representation method for EMRs that takes the information in clinical narratives into account. To address the limitations of previous approaches in handling complex parts of EMR data, an unsupervised method is proposed for building a patient representation that integrates unstructured data with structured data extracted from patients' EMRs. To model the extracted data, we employed a tree structure that captures the temporal relations of multiple medical events from the EMR. We processed clinical notes to extract symptoms, signs, and diseases using tools such as medspaCy, MetaMap, and scispaCy, and mapped the entities to the Unified Medical Language System (UMLS). After creating the tree data structure, we applied two novel relabeling methods to the non-leaf nodes of the tree to capture two temporal aspects of the extracted events. Traversing the tree yields a sequence from which an embedding vector can be created for each patient. A comprehensive evaluation of the proposed method on patient similarity and mortality prediction tasks demonstrated that our model achieves lower mean squared error (MSE), higher precision, and higher normalized discounted cumulative gain (NDCG) than the baselines.

Keywords: patient similarity analytics, patient representation learning, natural language processing, health informatics

1 Introduction

Patient similarity assessment identifies patients similar to a given patient. It allows physicians to gain insights from the records of matching patients and provide better treatments. Calculating patient similarity requires measuring the distance between patients within a population (1). A distance can be calculated from various structured and unstructured data types in an electronic medical record (EMR). EMRs can be processed much like general documents modeled as sequences of words; the difference is that EMRs are sequences of patient events, such as diagnoses, procedures, and medications. The representation of an EMR is a low-dimensional, fixed-length embedding vector, so it can be used to measure similarity between patients, just as a document representation can be used to measure similarity between documents. Among previous works on patient representations based on EMRs, some have relied on structured data types (2-6), while others have used only unstructured data (7,8).
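Once each patient is reduced to a fixed-length embedding vector, finding similar patients reduces to comparing vectors. A minimal cosine-similarity ranking (names and vectors are hypothetical) might look like:

```python
import numpy as np

def most_similar(patient_vecs, query_idx, top_k=2):
    """Rank patients by cosine similarity of their embedding vectors,
    excluding the query patient themselves."""
    V = patient_vecs / (np.linalg.norm(patient_vecs, axis=1, keepdims=True) + 1e-8)
    sims = V @ V[query_idx]
    sims[query_idx] = -np.inf   # never return the query patient
    return [int(i) for i in np.argsort(sims)[::-1][:top_k]]
```

Any of the representation methods surveyed above can feed this step; they differ only in how `patient_vecs` is produced.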


Representation Learning of EHR Data via Graph-Based Medical Entity Embedding

Wu, Tong, Wang, Yunlong, Wang, Yue, Zhao, Emily, Yuan, Yilian, Yang, Zhi

arXiv.org Machine Learning

Automatic representation learning of key entities in electronic health record (EHR) data is a critical step for healthcare informatics, turning heterogeneous medical records into structured and actionable information. Here we propose ME2Vec, an algorithmic framework for learning low-dimensional vectors of the most common entities in EHRs: medical services, doctors, and patients. ME2Vec leverages diverse graph embedding techniques to cater to the unique characteristics of each medical entity. Using real-world clinical data, we demonstrate the efficacy of ME2Vec over competitive baselines on disease diagnosis prediction.
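As a toy stand-in for graph-based entity embedding (ME2Vec itself uses specialized techniques per entity type), a truncated SVD of a patient-by-service adjacency matrix places patients and services in one shared low-dimensional space:

```python
import numpy as np

def svd_embed(adj, dim=2):
    """Embed the rows (patients) and columns (services) of an adjacency
    or co-occurrence matrix into one shared dim-dimensional space via a
    truncated SVD, so that P @ S.T approximates adj."""
    U, s, Vt = np.linalg.svd(adj, full_matrices=False)
    root_s = np.sqrt(s[:dim])           # split singular values across both sides
    return U[:, :dim] * root_s, Vt[:dim].T * root_s
```

Patients with identical service histories land on identical vectors, which is the basic property any of the fancier graph embeddings also preserves.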


Measuring Patient Similarities via a Deep Architecture with Medical Concept Embedding

Zhu, Zihao, Yin, Changchang, Qian, Buyue, Cheng, Yu, Wei, Jishang, Wang, Fei

arXiv.org Machine Learning

Evaluating the clinical similarity between pairs of patients is a fundamental problem in healthcare informatics. A proper patient similarity measure enables various downstream applications, such as cohort studies and comparative treatment effectiveness research. One major carrier for conducting patient similarity research is Electronic Health Records (EHRs), which are usually heterogeneous, longitudinal, and sparse. Although existing studies on learning patient similarity from EHRs have proven useful for solving real clinical problems, their applicability is limited by the lack of medical interpretation. Moreover, most previous methods assume a vector-based representation for patients, which typically requires aggregating medical events over a certain time period; as a consequence, temporal information is lost. In this paper, we propose a patient similarity evaluation framework based on the temporal matching of longitudinal patient EHRs. Two efficient methods are presented, one unsupervised and one supervised, both of which preserve the temporal properties of EHRs. The supervised scheme adopts a convolutional neural network architecture and learns an optimal representation of patient clinical records with medical concept embedding. Empirical results on real-world clinical data demonstrate substantial improvement over the baselines. We make our code and sample data available for further study.
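The supervised scheme's key ingredient, a convolution over the sequence of medical-concept embeddings followed by pooling over time, can be sketched as follows (a minimal NumPy illustration with made-up shapes, not the paper's architecture):

```python
import numpy as np

def conv_maxpool_repr(events, filters):
    """Slide each filter over a patient's sequence of concept embeddings
    (events: T x d), then max-pool over time. The result has one value
    per filter, so its length is independent of record length T."""
    T, d = events.shape
    n_f, w, _ = filters.shape           # n_f filters of width w over d dims
    feature_maps = np.array([
        [float(np.sum(events[t:t + w] * f)) for t in range(T - w + 1)]
        for f in filters
    ])
    return feature_maps.max(axis=1)     # max over time positions
```

The convolution looks at local temporal patterns of events, while the pooling step is what maps variable-length records to comparable fixed-length vectors, so temporal structure informs the representation without being averaged away up front.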